
lowering: Don't implicitly @nospecialize macros #57782

Closed

Conversation

@topolarity (Member) commented Mar 15, 2025

Inspired by #57781 (kudos to @nsajko for another good find)

There's not much purpose to this de-specialization, since macros are generally excluded from inference / compilation anyway (except for #57833, which was a bug).

Since our AST is already relatively homogeneous, it's not clear that we gained much from this de-specialization when it was used. Arbitrarily de-specializing a function also often leads to bad (wide) edges in inference, causing frequent invalidations.
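To make the change concrete, here is a rough sketch (the macro names are made up for illustration; only the explicit `@nospecialize` form below is real, documented syntax). Before this PR, lowering treated every macro definition roughly as if its arguments carried an implicit `@nospecialize`:

```julia
# Hypothetical illustration: under the old behavior, defining the first
# macro behaved like defining the second, because lowering inserted
# @nospecialize on macro arguments implicitly.

macro m(ex)                          # implicitly de-specialized by lowering
    esc(ex)
end

macro m_explicit(@nospecialize(ex))  # the equivalent explicit form
    esc(ex)
end
```

With specialization enabled (as this PR proposes), invoking the macro with different argument types (`Symbol` vs. `Expr` vs. a literal `Int`) could create distinct MethodInstances; with `@nospecialize`, a single de-specialized instance is reused.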

@topolarity topolarity force-pushed the ct/specialize-macrocall branch from dfdd613 to 75c3101 Compare March 15, 2025 13:31
@topolarity topolarity added the backport 1.10, backport 1.11, backport 1.12, and invalidations labels Mar 15, 2025
@vtjnash (Member) commented Mar 15, 2025

> only ever invoked de-specialized MethodInstances

That was sort of the point, though: since inference is supposed to reject these anyway, so attempted specialization here is expected to be just wasted effort.

@topolarity (Member, Author)

> inference is supposed to reject these anyway, so attempted specialization here is expected to be just wasted effort

Why the invalidations in #57781, then?

@nsajko (Contributor) commented Mar 19, 2025

This PR doesn't seem to prevent any invalidation - the invalidations are exactly the same before and after applying this change on top of current master, 6eef79c.

@nsajko (Contributor) commented Mar 19, 2025

Furthermore, I can confirm the invalidation of macro MethodInstances, such as "MethodInstance for var\"@ip_str\"(::LineNumberNode, ::Module, ::Any)", is still present on master (and with this PR).

@JeffBezanson (Member)

> only ever invoked de-specialized MethodInstances

I'm not sure why the code is written this way (it seems to be from c79e34c), but I don't think this is true: this code path does end up doing specialization. There is then a check for functions starting with '@', and we skip inference in that case. But then I'm not sure where the backedges come from; there must be another code path that ends up invoking inference. Anyway, you could try disabling the '@' check in gf.c.

@JeffBezanson (Member)

Oh we also insert nospecialize on macro arguments in lowering.

@topolarity (Member, Author)

> Oh we also insert nospecialize on macro arguments in lowering.

Ah, yep. Looks like that was auto-qualifying any macro for the "single specialization" pre-compile hint: #57833

This was referenced Mar 20, 2025
@topolarity topolarity force-pushed the ct/specialize-macrocall branch from 75c3101 to 8d1ce02 Compare March 25, 2025 23:16
@topolarity topolarity force-pushed the ct/specialize-macrocall branch from 8d1ce02 to 9637b0e Compare March 25, 2025 23:16
@topolarity topolarity changed the title from "macrocall: Specialize macro invocations like normal function calls" to "lowering: Don't implicitly @nospecialize macros" Mar 25, 2025
@topolarity (Member, Author)

I've updated this to implement the suggestion at #57833 (comment)

We can land this PR, #57833, or both.

@nsajko (Contributor) commented Mar 26, 2025

Comparing the current master (3360a44) and the current state of this PR (9637b0e):

  • invalidation on loading package TypeDomainNaturalNumbers v7.0.0
    • the number of invalidations drops from 725 to 662 🚀
    • example eliminated tree of invalidations:
      {
          "type": "Tuple{Type{Int64}, Integer}",
          "tree": {
              "method_instance": {
                  "method": "reverse!(v::AbstractVector, start::Integer, stop::Integer) @ Base array.jl:2227",
                  "method_instance": "MethodInstance for reverse!(::AbstractVector, ::Integer, ::Integer)"
              },
              "children": [
                  {
                      "method_instance": {
                          "method": "_reverse!(A::AbstractVector, ::Colon) @ Base array.jl:2182",
                          "method_instance": "MethodInstance for Base._reverse!(::AbstractVector, ::Colon)"
                      },
                      "children": [
                          {
                              "method_instance": {
                                  "method": "var\"#reverse!#88\"(dims, ::typeof(reverse!), A::AbstractVector) @ Base array.jl:2181",
                                  "method_instance": "MethodInstance for Base.var\"#reverse!#88\"(::Colon, ::typeof(reverse!), ::AbstractVector)"
                              },
                              "children": [
                                  {
                                      "method_instance": {
                                          "method": "reverse!(A::AbstractVector; dims) @ Base array.jl:2181",
                                          "method_instance": "MethodInstance for reverse!(::AbstractVector)"
                                      },
                                      "children": [
                                          {
                                              "method_instance": {
                                                  "method": "var\"@commutative\"(__source__::LineNumberNode, __module__::Module, myexpr) @ LinearAlgebra ~/tmp/jl/jl/nightly/share/julia/stdlib/v1.13/LinearAlgebra/src/special.jl:82",
                                                  "method_instance": "MethodInstance for LinearAlgebra.var\"@commutative\"(::LineNumberNode, ::Module, ::Any)"
                                              },
                                              "children": [
                                              ]
                                          }
                                      ]
                                  }
                              ]
                          }
                      ]
                  }
              ]
          }
      }
    • full data

@vtjnash (Member) commented Mar 26, 2025

I am somewhat opposed to this PR, unless it can be shown conclusively that it does not harm existing code. Making it so the compiler has to compile different copies of this function for different syntax forms is not generally a desirable feature for macros.

@KristofferC (Member)

> unless it can be shown conclusively that it does not harm existing code.

How would that be done?

@topolarity (Member, Author) commented Mar 26, 2025

We could alternatively auto-insert a more precise type like Union{Bool,Int,Float64,String,Symbol,QuoteNode,Expr} when the user didn't provide any. That's at least much more precise than Any (there's no point forcing inference to explore, create edges, or generate code for types it will never see).

But also since we're going to land #57833 I thought this only affects explicit precompile statements anyway
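The union-annotation alternative could be sketched roughly as follows (the const and macro names here are made up; the exact member list of the union is itself the point under discussion):

```julia
# Hypothetical: instead of leaving macro arguments as Any, lowering could
# annotate them with a union covering the AST leaf types macro calls
# typically produce.
const MacroArgTypes = Union{Bool,Int,Float64,String,Symbol,QuoteNode,Expr}

macro twice(ex::MacroArgTypes)   # made-up macro for illustration
    esc(:(($ex, $ex)))           # splice the argument expression twice
end
```

This keeps a single (or small, union-split) compiled signature while still giving inference something narrower than `Any` to work with.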

topolarity added a commit that referenced this pull request Mar 26, 2025
…ion (#57833)

Despite disabling these from being compiled in `gf.c` for dynamic
invocations, the pre-compilation code was adding `macro` Methods anyway
to the workqueue.

Replaces #57782
@JeffBezanson (Member)

I guess any potential problem with this would show up as precompile time?

@vtjnash (Member) commented Mar 28, 2025

It'd show up as a failure to generate an unspecialized precompile for macros, and as excess compilation (latency) when using macros (since we don't normally infer them, it would always be wasted time).

@topolarity (Member, Author)

> (since we don't normally infer them, it would always be wasted time)

Are you saying that the pre-compiled code would never be invoked for some reason?

I'm still of the mindset that asking inference to infer poorly is worse (more edges, often more dynamic dispatch, etc.) than either asking it not to infer at all or to infer with normal quality

@vtjnash (Member) commented Mar 28, 2025

> Are you saying that the pre-compiled code would never be invoked for some reason?

That is possible. You'd need to generate pre-compiled code for each possible combination of values that the user might splice into the macro expression, which generally just means precompile would refuse to generate anything.

> I'm still of the mindset that asking inference to infer poorly is worse (more edges, often more dynamic dispatch, etc.) than either asking it not to infer at all or to infer with normal quality

The normal behavior is to skip inference of these for exactly that reason. This PR makes the situation worse, though: now, on top of refusing to infer them, you're also refusing to re-use existing code (because you're creating a situation where it is assumed to be inferred, which was previously not assumed).

@topolarity (Member, Author)

> you're also refusing to re-use existing code (because you're creating a situation where it is assumed to be inferred, which was previously not assumed).

I don't really understand what this means - why does specialization affect whether we assume something is inferred, or whether we re-use code?

which generally just means precompile would refuse to generate anything

I understand this to mean we generally won't generate any code, so I don't understand what code we're refusing to re-use

Anyway, what do you think of the Union{Bool,Int,Float64,String,Symbol,QuoteNode,Expr} solution?

P.S. I don't really feel strongly about this issue - just asking to develop my general understanding

@vtjnash (Member) commented Mar 28, 2025

> I don't really understand what this means - why does specialization affect whether we assume something is inferred, or whether we re-use code?

We cannot infer something for Any unless it is marked nospecialize, and we also cannot reuse the code when the types change unless they are marked nospecialize.
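The distinction can be sketched like this (the function names are made up; the `Base.specializations` introspection mentioned in the comments is an internal, version-dependent helper, shown only to make the point concrete):

```julia
# With a specializing signature, the compiler may compile one MethodInstance
# per concrete argument type; with @nospecialize, a single compiled instance
# taking `Any` can be reused across all argument types.

f_spec(x) = typeof(x)                     # may specialize per call type
f_nospec(@nospecialize(x)) = typeof(x)    # one de-specialized instance

f_spec(1);    f_spec("a")                 # potentially two specializations
f_nospec(1);  f_nospec("a")               # reuses one Any-typed instance

# Internal, version-dependent introspection (indicative only):
#   Base.specializations(first(methods(f_spec)))
#   Base.specializations(first(methods(f_nospec)))
```

The reuse point follows: the `@nospecialize` instance is valid for any argument, while a specialized instance is only valid for the signature it was compiled against.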

> Anyway, what do you think of the Union{Bool,Int,Float64,String,Symbol,QuoteNode,Expr} solution?

In practice, that list is likely to be incomplete (at least Float32, UInt8/16/32/64/128, Int128, and BigInt are missing now), and it may also not be stable, since we could change JuliaSyntax to put more expressive nodes than simply Expr as a catch-all, or because macros-writing-macros can splice in any sort of literal (such as a function object):

```julia
julia> eval(:(@eval $stdout))
Base.TTY(RawFD(15) open, 0 bytes waiting)
```

KristofferC pushed a commit that referenced this pull request Mar 31, 2025
…ion (#57833)

Despite disabling these from being compiled in `gf.c` for dynamic
invocations, the pre-compilation code was adding `macro` Methods anyway
to the workqueue.

Replaces #57782

(cherry picked from commit 6ce51d3)
This was referenced Mar 31, 2025
topolarity added a commit that referenced this pull request Apr 1, 2025
…ion (#57833)

Despite disabling these from being compiled in `gf.c` for dynamic
invocations, the pre-compilation code was adding `macro` Methods anyway
to the workqueue.

Replaces #57782
@topolarity (Member, Author) commented Apr 1, 2025

Thanks for the explanations @vtjnash

My vote is still that de-specialized inference here is not very useful (it adds spurious invalidations and fails to pre-compile most of the code you were hoping to "re-use"), but I also agree that the union-split route is incompatible with direct macrocall interpolation, so that won't work either.

I'm closing this in favor of the status quo; #57833 fixed the practical issues for now anyway.

@topolarity topolarity closed this Apr 1, 2025